

Section: New Results

pWCET Estimation: a System Concern

Participants : Irina-Mariuca Asavoae, Mihail Asavoae, Slim Ben-Amor, Antoine Bertout, Liliana Cucu, Adriana Gogonel, Tomasz Kloda, Cristian Maxim, Walid Talaboulma.

From modelling to timing validation, the design of an embedded system may benefit from a better use of probabilities, provided that means are available to prove the results. The arrival of new complex processors has made the timing analysis of programs more difficult, while there is a growing need to integrate uncertainties from all levels of embedded systems design. Probabilistic and statistical approaches are one possible solution, and they require appropriate proofs in order to be accepted by both the scientific community and industry. Such proofs cannot be limited to the processor or program level, and we plead for a system approach that takes into account the possible interactions between different design levels, using the probabilistic formulation as a compositional principle.

Our first arguments are provided by a valid statistical estimation of bounds on the execution time of a program on a processor. More precisely, the probabilistic worst-case execution time (pWCET) 𝒞 of a program is an upper bound on all possible probabilistic execution times 𝒞_i for all possible execution scenarios S_i, i ≥ 1. According to the Extreme Value Theory (EVT), if the maximum of the execution times 𝒞_i, i ≥ 1, converges, then it converges to one of the three possible extreme value laws: Fréchet, Weibull or Gumbel, corresponding to a shape parameter ξ > 0, ξ < 0, and ξ = 0, respectively. EVT has two different formulations, the Generalized Extreme Value (GEV) distribution and the Generalized Pareto Distribution (GPD), and the difference between them lies in the way the extreme values are selected. GEV is based on block maxima reasoning: the execution times are grouped into chronological groups (called blocks), and only the largest value of each group is considered an extreme value. GPD is based on a threshold approach that considers only the values larger than a chosen threshold as extreme values. Our voting procedure relies on both formulations of EVT.
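The two selection schemes can be sketched as follows (a minimal Python illustration; the function names and the quantile-based choice of threshold are ours, not the tool's):

```python
def block_maxima(times, block_size):
    """GEV selection: group the execution times chronologically into
    blocks and keep only the largest value of each block."""
    return [max(times[i:i + block_size])
            for i in range(0, len(times) - block_size + 1, block_size)]

def peaks_over_threshold(times, quantile):
    """GPD selection: keep only the values larger than a threshold,
    here taken as an empirical quantile of the trace."""
    s = sorted(times)
    threshold = s[int(quantile * (len(s) - 1))]
    return [t for t in times if t > threshold]
```

Both functions return the sample of extremes on which the GEV (respectively GPD) model is then fitted.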

  • Block size estimation : The GEV models obtained for different block sizes (BS), from BS = 10 to BS = n/10, are compared, where n is the number of execution times in the trace. We compare how well the models fit the extreme values corresponding to each choice of BS, as well as the evolution of the shape parameter as a function of BS. We keep the BS that ensures the best compromise between fitting the data and having a shape parameter within a stability interval of the range of shape parameter estimates. How well a GEV model fits the data is analyzed within the tool by graphical methods, including the QQ-plot and the return level plot. We keep the GEV model corresponding to the shape parameter resulting from the aforementioned compromise, and we compute the pWCET as 1 − CDF (the complementary cumulative distribution function) of the GEV.

  • Threshold level estimation : The procedure is similar to the GEV one. All GPD models obtained for different threshold levels, from 80% to 99%, are compared. As for GEV, we compare how well the models fit the extreme values corresponding to each threshold, as well as the evolution of the shape parameter as a function of the threshold. In the end we keep the threshold level ensuring the best compromise between fitting the data (graphical method) and having a shape parameter within a stability interval of the range of shape parameter estimates. We also consider the mean residual life plot (mean of the excesses), which may be consulted in case of doubt between two different thresholds: we prefer the threshold level from which the curve of the mean excess becomes linear. We keep the GPD model corresponding to the shape parameter resulting from the aforementioned compromise, and we compute the pWCET as 1 − CDF of the GPD.

  • Comparing GEV and GPD pWCET estimates : The pWCETs obtained with the two methods, GEV and GPD, are compared graphically. Superposing the two curves allows us to assess the distance between the two distributions. If an important difference is noticed, other GEV/GPD models are tested. In such cases, calculating the pWCET estimate as a combination of the GEV and GPD results is also recommended: a joint pWCET estimate is obtained by choosing, for each probability, the largest value between GEV and GPD. The tool implementing this method is available online at inria-rscript.serveftp.com (a secured connection is provided upon request) [8].

  • Conditions of use : The application of EVT requires verifying that the analyzed data are identically distributed, i.e., that the execution times follow the same (unknown) probability distribution. This condition is tested before the analysis is started, and the data are treated according to the test results. Another EVT applicability condition is the independence of the data. This condition is not mandatory, in the sense that non-independent data can also be analyzed. The case of dependent data can be split into two sub-cases. In the first one there are dependencies within the data, but the extreme values selected are still independent; the analysis is then carried out in the same way as for independent data. In the second one there are dependencies between the extreme values as well; in that case one more step is added to the procedure: a de-clustering process before applying GPD, and the use of the extremal index when GEV is applied.
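The block-size sweep described above can be sketched with SciPy (an illustrative fragment under our own naming, not the tool's implementation; note that scipy.stats.genextreme parameterizes the shape as c = −ξ):

```python
from scipy.stats import genextreme

def shape_by_block_size(times, block_sizes):
    """Fit a GEV model to the block maxima for each candidate block size
    and return the estimated shape parameter xi for each.
    SciPy's genextreme uses c = -xi, hence the negation below."""
    shapes = {}
    for bs in block_sizes:
        maxima = [max(times[i:i + bs])
                  for i in range(0, len(times) - bs + 1, bs)]
        c, loc, scale = genextreme.fit(maxima)
        shapes[bs] = -c
    return shapes
```

The retained BS is then the one whose ξ estimate lies in the stability interval while the QQ-plot and return level plot confirm the fit.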
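The mean residual life check used for threshold selection amounts to computing, for each candidate threshold u, the mean of the excesses above u (a small sketch under our own naming):

```python
def mean_excess(times, thresholds):
    """For each candidate threshold u, return the mean of the excesses
    (t - u) over the values t that exceed u.  If this curve is roughly
    linear from some u on, a GPD fit above u is considered adequate."""
    result = {}
    for u in thresholds:
        excesses = [t - u for t in times if t > u]
        result[u] = sum(excesses) / len(excesses) if excesses else 0.0
    return result
```

For an exponential sample the curve is flat (the memoryless case), which is the textbook sanity check for this diagnostic.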
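The joint GEV/GPD estimate mentioned above, which keeps for each probability the largest of the two execution-time bounds, can be sketched as follows (the two callables standing for the fitted models are hypothetical placeholders):

```python
def joint_pwcet(bound_gev, bound_gpd, probabilities):
    """For each exceedance probability, keep the largest (most
    pessimistic) execution-time bound returned by the GEV and GPD
    models.  bound_gev and bound_gpd map a probability to the
    execution-time bound of the corresponding fitted model."""
    return {p: max(bound_gev(p), bound_gpd(p)) for p in probabilities}
```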
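The de-clustering step applied when the extreme values themselves are dependent can be sketched with a simple runs method (the gap parameter is illustrative, not a value prescribed by the tool):

```python
def decluster(times, threshold, gap):
    """Runs de-clustering: exceedances of the threshold that occur
    within `gap` positions of each other belong to the same cluster;
    only each cluster's maximum is kept, so that the retained extremes
    can be treated as (approximately) independent."""
    cluster_maxima, current, last_idx = [], [], None
    for i, t in enumerate(times):
        if t > threshold:
            if last_idx is not None and i - last_idx > gap:
                cluster_maxima.append(max(current))
                current = []
            current.append(t)
            last_idx = i
    if current:
        cluster_maxima.append(max(current))
    return cluster_maxima
```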

During the second year of the PhD thesis of Walid Talaboulma, we continued exploring solutions for WCET (Worst-Case Execution Time) estimation and real-time scheduling on multiprocessors. A WCET analysis done on a monoprocessor system (in isolation) can no longer be trusted to be accurate when the tasks run on a multiprocessor (two processors): the problem of co-runner interference arises, due to contention on shared hardware. The two processors share the same memory, and contention occurs on simultaneous accesses, delaying one of the requests; this can, counter-intuitively, make programs run longer on a multiprocessor than what the analysis predicted on a monoprocessor, leading to deadline misses. In [20] the authors evaluate explicit reservation of cache memory to reduce the cache-related preemption delay observed when tasks share a cache in a preemptive multitasking hard real-time system. Another solution is presented in [19]: tasks' accesses to shared resources are managed using performance counters to stop tasks when they exceed their allocated budget (for instance, cache misses), thus providing guarantees on global memory bandwidth. Moreover, in [15] an offline analysis uses heuristics to find optimal time-triggered schedules for shared memory accesses.

In our work we propose to generate the memory access profile of a program by running tasks on a cycle-accurate system simulator, with a cycle-accurate model of the DDRAM memory controller and a full model of the memory hierarchy, including caches and main memory devices, and by logging every memory event that occurs during the simulation. Our approach requires neither modifications of the software layer nor recompilation of the task code. As a proof of concept, we first focus on simple tasks with few branches and simple memory access patterns, and we choose a COTS (commercial off-the-shelf) platform with two complex processor cores; we intend to loosen these constraints once our analysis has matured. We use the profiles to account for co-runner interference, add it to the WCET value obtained in isolation, and then update the schedule. We can also insert idle times at appropriate scheduling events to decrease this interference, and, in the future, use a modified memory management system to pre-load specific memory areas into the cache, thus sliding those accesses back in time to eliminate simultaneous memory accesses and converge towards the isolation WCET value.
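As a first approximation of this idea, a co-runner interference budget can be derived from two logged access profiles by charging a fixed penalty for every access of one core that falls close to an access of the other. This is a toy sketch: the profile format, the collision window and the per-collision penalty are illustrative assumptions, not the simulator's actual output or a validated DRAM cost model.

```python
import bisect

def interference_delay(profile_a, profile_b, window, penalty):
    """Count the memory accesses of core A that fall within `window`
    cycles of an access of core B (a potential collision on the shared
    memory) and charge a fixed `penalty` in cycles for each one.
    Profiles are sorted lists of access timestamps in cycles."""
    delay = 0
    for t in profile_a:
        j = bisect.bisect_left(profile_b, t - window)
        if j < len(profile_b) and profile_b[j] <= t + window:
            delay += penalty
    return delay

def adjusted_wcet(wcet_isolation, profile_a, profile_b, window=4, penalty=10):
    """Isolation WCET plus the pessimistic co-runner interference budget."""
    return wcet_isolation + interference_delay(profile_a, profile_b, window, penalty)
```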